User:Merge bot
This user account is a bot that uses PHP, operated by Wbm1058 (talk). It is used to make repetitive automated or semi-automated edits that would be extremely tedious to do manually, in accordance with the bot policy. The bot is approved and currently active – the relevant request for approval can be seen here. Administrators: if this bot is malfunctioning or causing harm, please block it.
Bots by wbm1058: RMCD bot • Merge bot • Bot1058
Tasks
| Bot Task | Status | Description | Activity |
| --- | --- | --- | --- |
| Task 1 | Approved | Maintains Wikipedia:Proposed mergers/Log and its subpages | Active |
| Task 2 | Approved | History-merges categories that were moved by User:Cydebot between April 2006 and March 2015 | Active |
Task 1
This bot account is responsible for maintaining Wikipedia:Proposed mergers/Log and its subpages, which are derived from Category:Articles to be merged, for the benefit of Wikipedia:Proposed mergers and Wikipedia:WikiProject Merge.
It is a revived fork of RFC bot's automated list of proposed mergers, which stopped working after August 2011. The log files created by that bot operation were proposed for deletion in March 2012. Fortunately they were not deleted as a result of that proposal, which enabled me to find them and revive the operation under a new bot. Normally these logs are deleted after being emptied – that is, once all merge proposals for a given month are resolved – by tagging them with {{db-g6|rationale=this is a maintenance page from a previous month that was only intended to contain outstanding entries, and no outstanding entries remain}}. For example, see the deleted Wikipedia:Proposed mergers/Log/June 2008.
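The monthly log structure described above can be sketched in code. This is a minimal illustration in Python (the bot itself is written in PHP), and the function name, input shape, and field names are hypothetical, not the bot's actual code: it groups merge-tagged articles into subpage titles of the form Wikipedia:Proposed mergers/Log/Month Year.

```python
from collections import defaultdict


def group_by_month(entries):
    """Group merge proposals into monthly log subpage titles.

    `entries` is a list of (article_title, tag_date) pairs, where
    tag_date is a datetime.date recording when the merge tag was added.
    Returns a dict mapping each log subpage title to its article list.
    """
    logs = defaultdict(list)
    for title, tagged in entries:
        # "%B %Y" renders e.g. "June 2008", matching the log naming scheme
        subpage = f"Wikipedia:Proposed mergers/Log/{tagged.strftime('%B %Y')}"
        logs[subpage].append(title)
    return dict(logs)
```

Once every entry in a month's list is resolved, that subpage's list is empty and the page becomes a candidate for the {{db-g6}} tagging described above.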
Merge bot task 1 generally runs twice daily (every 12 hours). Occasionally it misses a run because it crashes with an API error such as: Fatal error: Uncaught Exception: HTTP Error. When this task was approved in May 2013, a typical run took 1 hr, 22 min. By February 2017, it typically ran within just 20 minutes. The work queue is somewhat shorter now, but I'm guessing improved back-end hardware and/or software performance is also partly responsible for the shorter processing times. In May 2019 I increased the frequency to twice daily.
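A transient HTTP error like the one above need not kill a whole run; one common mitigation is to retry the failing call a few times before giving up. This is a hedged Python sketch, not the bot's actual PHP error handling, and `api_call` stands in for whatever request raises the exception:

```python
import time


def call_with_retries(api_call, attempts=3, delay=5.0):
    """Retry a flaky API call, re-raising only after all attempts fail."""
    for attempt in range(1, attempts + 1):
        try:
            return api_call()
        except Exception:  # e.g. an "HTTP Error" from the API layer
            if attempt == attempts:
                raise  # out of attempts: let the run fail visibly
            time.sleep(delay)  # back off before retrying
```

With a wrapper like this, a single dropped connection costs a short delay instead of a missed 12-hour run.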